Nakagami distribution#

The Nakagami distribution (nakagami) is a continuous distribution on \(x \ge 0\) used heavily in wireless communications to model the envelope (magnitude) of a fading signal. It is particularly convenient because its squared value is Gamma-distributed.


1) Title & classification#

| Item | Value |
| --- | --- |
| Name | Nakagami (nakagami) |
| Type | Continuous |
| Support | \(x \in [0,\infty)\) |
| Parameters | shape \(m \ge 1/2\), spread \(\Omega > 0\) |
| Common reparameterization | scale \(s=\sqrt{\Omega}\) (SciPy’s scale) |

What you’ll be able to do after this notebook#

  • recognize when a Nakagami model makes sense (fading envelopes, positive magnitudes)

  • write the PDF/CDF and compute key moments (including skewness/kurtosis)

  • interpret how \((m,\Omega)\) affects shape and scale

  • derive mean/variance and the log-likelihood (including MLE equations)

  • sample from a Nakagami distribution using only NumPy, via a Gamma sampler

  • use scipy.stats.nakagami for pdf, cdf, rvs, and fit

import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio

from scipy import stats
from scipy.optimize import root_scalar
from scipy.special import gammainc, gammaln, hyp1f1, psi

pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=4, suppress=True)

rng = np.random.default_rng(42)

2) Intuition & Motivation#

What it models#

Nakagami is most commonly used for nonnegative magnitudes (envelopes). In wireless fading, if a complex baseband channel coefficient is

\[ H = X e^{i\Phi}, \]

with random phase \(\Phi\), then the envelope \(X = |H|\) is modeled as \(X \sim \mathrm{Nakagami}(m,\Omega)\).

  • \(m\) (“fading parameter”) controls how severe the fading is.

  • \(\Omega\) is the average power (in fact, \(\mathbb{E}[X^2]=\Omega\)).

Typical use cases#

  • Wireless communications: Nakagami-\(m\) fading envelopes and link-level simulations (often used as a flexible alternative to Rayleigh/Rician).

  • Radar/sonar: scattered amplitude models.

  • Imaging / texture: a flexible positive amplitude model when Rayleigh is too restrictive.

Relations to other distributions#

A key identity is that the power \(Y=X^2\) is Gamma-distributed:

\[ Y = X^2 \sim \mathrm{Gamma}\left(\text{shape}=m,\ \text{scale}=\frac{\Omega}{m}\right). \]

This link is the main computational advantage of the distribution.

Special cases / connections:

  • \(m=1/2\) gives the half-normal distribution (with scale \(\sigma=\sqrt{\Omega}\)).

  • \(m=1\) gives the Rayleigh distribution (with Rayleigh scale \(\sigma=\sqrt{\Omega/2}\)).

  • with \(m=k/2\) and \(\Omega=k\), you recover the chi distribution with \(k\) degrees of freedom.

  • as \(m\to\infty\), \(X\) concentrates near \(\sqrt{\Omega}\) (fading “disappears”).
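
These special cases are easy to check numerically (a quick sanity sketch using scipy.stats; the scale mapping \(s=\sqrt{\Omega}\) used here is described in Section 9):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.01, 3.0, 50)
omega = 1.5

# m = 1: Nakagami reduces to Rayleigh with sigma = sqrt(omega / 2)
pdf_nak = stats.nakagami(1.0, scale=np.sqrt(omega)).pdf(x)
pdf_ray = stats.rayleigh(scale=np.sqrt(omega / 2.0)).pdf(x)

# m = 1/2: Nakagami reduces to half-normal with sigma = sqrt(omega)
pdf_nak_half = stats.nakagami(0.5, scale=np.sqrt(omega)).pdf(x)
pdf_hn = stats.halfnorm(scale=np.sqrt(omega)).pdf(x)

print(np.allclose(pdf_nak, pdf_ray), np.allclose(pdf_nak_half, pdf_hn))
```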

3) Formal Definition#

We use parameters \((m,\Omega)\) with \(m \ge 1/2\) and \(\Omega>0\).

PDF#

For \(x \ge 0\),

\[ f(x; m, \Omega) = \frac{2 m^m}{\Gamma(m)\,\Omega^m}\, x^{2m-1}\,\exp\left(-\frac{m x^2}{\Omega}\right). \]

CDF#

Let \(\gamma(\cdot,\cdot)\) be the lower incomplete gamma function and \(P(a,z)=\gamma(a,z)/\Gamma(a)\) its regularized form. Then for \(x \ge 0\),

\[ F(x; m,\Omega) = P\left(m, \frac{m x^2}{\Omega}\right) = \frac{\gamma\left(m,\frac{m x^2}{\Omega}\right)}{\Gamma(m)}. \]

SciPy’s scipy.special.gammainc(m, z) implements the regularized \(P(m,z)\).

def _validate_nakagami_params(m, omega):
    m = float(m)
    omega = float(omega)
    if not (m >= 0.5):
        raise ValueError("m must be >= 0.5")
    if not (omega > 0):
        raise ValueError("omega must be > 0")
    return m, omega


def nakagami_logpdf(x, m, omega):
    m, omega = _validate_nakagami_params(m, omega)
    x = np.asarray(x, dtype=float)

    out = np.full_like(x, -np.inf, dtype=float)

    pos = x > 0
    if np.any(pos):
        out[pos] = (
            np.log(2.0)
            + m * np.log(m)
            - gammaln(m)
            - m * np.log(omega)
            + (2.0 * m - 1.0) * np.log(x[pos])
            - (m / omega) * (x[pos] ** 2)
        )

    # Handle x==0 (only finite at m=1/2)
    zero = x == 0
    if np.any(zero) and m == 0.5:
        out[zero] = 0.5 * np.log(2.0) - 0.5 * np.log(np.pi) - 0.5 * np.log(omega)

    return out


def nakagami_pdf(x, m, omega):
    return np.exp(nakagami_logpdf(x, m, omega))


def nakagami_cdf(x, m, omega):
    m, omega = _validate_nakagami_params(m, omega)
    x = np.asarray(x, dtype=float)

    out = np.zeros_like(x, dtype=float)
    nonneg = x >= 0
    if np.any(nonneg):
        t = (m / omega) * (x[nonneg] ** 2)
        out[nonneg] = gammainc(m, t)

    return out

4) Moments & Properties#

A very useful identity is \(Y=X^2 \sim \mathrm{Gamma}(m,\ \text{scale}=\Omega/m)\).

Raw moments#

For any \(k>-2m\) (so the integral converges),

\[ \mathbb{E}[X^k] = \left(\frac{\Omega}{m}\right)^{k/2} \frac{\Gamma\left(m+\frac{k}{2}\right)}{\Gamma(m)}. \]

In particular,

  • \(\mathbb{E}[X] = \dfrac{\Gamma(m+1/2)}{\Gamma(m)}\sqrt{\dfrac{\Omega}{m}}\)

  • \(\mathbb{E}[X^2] = \Omega\)

  • \(\mathrm{Var}(X) = \Omega - \mathbb{E}[X]^2\)

The mode (for \(m>1/2\)) follows from maximizing the log-PDF:

\[ x_{\text{mode}} = \sqrt{\Omega\,\frac{m-1/2}{m}}. \]
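
A quick grid check of the mode formula (a sketch; the parameters and grid resolution are arbitrary):

```python
import numpy as np
from scipy import stats

m, omega = 2.0, 1.5
x = np.linspace(1e-6, 3.0, 200_001)
pdf = stats.nakagami(m, scale=np.sqrt(omega)).pdf(x)

mode_grid = x[np.argmax(pdf)]                  # argmax on a dense grid
mode_formula = np.sqrt(omega * (m - 0.5) / m)  # closed-form mode
print(mode_grid, mode_formula)
```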

Skewness and kurtosis#

Using the raw moments above (up to order 4) you can compute central moments and then

\[ \gamma_1 = \frac{\mu_3}{\sigma^3},\qquad \kappa = \frac{\mu_4}{\sigma^4}, \]

where \(\mu_r\) are central moments, \(\sigma^2=\mu_2\), and \(\kappa - 3\) is the excess kurtosis.

MGF / characteristic function#

Because the tail is Gaussian-like (\(\exp(-c x^2)\)), the MGF \(M_X(t)=\mathbb{E}[e^{tX}]\) exists for all real \(t\). A closed form uses the confluent hypergeometric function \({}_1F_1\):

\[ M_X(t) = {}_1F_1\left(m;\tfrac12; \frac{\Omega t^2}{4m}\right) + t\sqrt{\frac{\Omega}{m}}\frac{\Gamma(m+1/2)}{\Gamma(m)} \,{}_1F_1\left(m+\tfrac12;\tfrac32; \frac{\Omega t^2}{4m}\right). \]

The characteristic function \(\varphi_X(t)=\mathbb{E}[e^{itX}]\) is the same expression with \(t^2\) replaced by \(-t^2\) and the second term multiplied by \(i\).

For the power \(Y=X^2\) (Gamma), the MGF is simple:

\[ M_Y(t)=\left(1-\frac{\Omega}{m}t\right)^{-m},\qquad t < \frac{m}{\Omega}. \]
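
This closed form is easy to verify by Monte Carlo through the Gamma link (a sketch; the seed, sample size, and the value of \(t\) are arbitrary, subject to \(t < m/\Omega\)):

```python
import numpy as np

rng = np.random.default_rng(0)
m, omega = 2.0, 1.5
t = 0.3  # needs t < m / omega ~= 1.33 for the MGF to exist

y = rng.gamma(shape=m, scale=omega / m, size=500_000)  # Y = X^2
mgf_mc = np.exp(t * y).mean()                          # Monte Carlo E[e^{tY}]
mgf_closed = (1.0 - (omega / m) * t) ** (-m)           # closed form
print(mgf_mc, mgf_closed)
```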

Entropy#

The differential entropy has a compact expression:

\[ h(X) = m - \log 2 + \log\Gamma(m) + \tfrac12\log\left(\frac{\Omega}{m}\right) - \left(m-\tfrac12\right)\psi(m), \]

where \(\psi\) is the digamma function.

def nakagami_raw_moment(k, m, omega):
    m, omega = _validate_nakagami_params(m, omega)
    k = float(k)
    return (omega / m) ** (k / 2.0) * np.exp(gammaln(m + k / 2.0) - gammaln(m))


def nakagami_mean(m, omega):
    return nakagami_raw_moment(1.0, m, omega)


def nakagami_var(m, omega):
    mu = nakagami_mean(m, omega)
    return omega - mu**2


def nakagami_skew_kurt(m, omega):
    m1 = nakagami_raw_moment(1.0, m, omega)
    m2 = nakagami_raw_moment(2.0, m, omega)
    m3 = nakagami_raw_moment(3.0, m, omega)
    m4 = nakagami_raw_moment(4.0, m, omega)

    var = m2 - m1**2
    mu3 = m3 - 3.0 * m2 * m1 + 2.0 * m1**3
    mu4 = m4 - 4.0 * m3 * m1 + 6.0 * m2 * m1**2 - 3.0 * m1**4

    skew = mu3 / (var ** 1.5)
    kurt = mu4 / (var**2)
    return skew, kurt


def nakagami_mgf(t, m, omega):
    m, omega = _validate_nakagami_params(m, omega)
    t = np.asarray(t, dtype=float)

    z = omega * (t**2) / (4.0 * m)
    gamma_ratio = np.exp(gammaln(m + 0.5) - gammaln(m))

    term1 = hyp1f1(m, 0.5, z)
    term2 = t * np.sqrt(omega / m) * gamma_ratio * hyp1f1(m + 0.5, 1.5, z)
    return term1 + term2


def nakagami_cf(t, m, omega):
    m, omega = _validate_nakagami_params(m, omega)
    t = np.asarray(t, dtype=float)

    z = -omega * (t**2) / (4.0 * m)
    gamma_ratio = np.exp(gammaln(m + 0.5) - gammaln(m))

    term1 = hyp1f1(m, 0.5, z)
    term2 = 1j * t * np.sqrt(omega / m) * gamma_ratio * hyp1f1(m + 0.5, 1.5, z)
    return term1 + term2


def nakagami_entropy(m, omega):
    m, omega = _validate_nakagami_params(m, omega)
    return (
        m
        - np.log(2.0)
        + gammaln(m)
        + 0.5 * np.log(omega / m)
        - (m - 0.5) * psi(m)
    )


m_ex, omega_ex = 2.0, 1.5
mu_ex = nakagami_mean(m_ex, omega_ex)
var_ex = nakagami_var(m_ex, omega_ex)
skew_ex, kurt_ex = nakagami_skew_kurt(m_ex, omega_ex)
entropy_ex = nakagami_entropy(m_ex, omega_ex)

mu_ex, var_ex, skew_ex, kurt_ex, (kurt_ex - 3.0), entropy_ex
(1.1512425464397995,
 0.17464059926680608,
 0.40569507726266335,
 3.0592950893995936,
 0.059295089399593603,
 0.5288352805664636)

5) Parameter Interpretation#

The spread parameter \(\Omega\)#

\(\Omega\) is literally the second moment:

\[ \Omega = \mathbb{E}[X^2]. \]

Changing \(\Omega\) mostly rescales the distribution: if \(X\sim\mathrm{Nakagami}(m,\Omega)\) then \(X/\sqrt{\Omega}\sim\mathrm{Nakagami}(m,1)\).
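
This scale-family property can be checked with a KS test (a sketch via the Gamma link; the seed and parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m, omega = 2.0, 2.5

# X ~ Nakagami(m, omega) via the Gamma link, then rescale by sqrt(omega)
x = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=200_000))
z = x / np.sqrt(omega)

# Compare the rescaled sample against Nakagami(m, 1)
res = stats.kstest(z, stats.nakagami(m, scale=1.0).cdf)
print(res.statistic)
```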

The shape parameter \(m\)#

\(m\) controls how concentrated the distribution is away from 0:

  • \(m = 1/2\): half-normal (density is positive at 0).

  • \(m = 1\): Rayleigh (classical “fully developed” multipath fading envelope).

  • larger \(m\): less fading; the distribution becomes more peaked near \(\sqrt{\Omega}\).

  • as \(m\to\infty\): \(X\) concentrates near \(\sqrt{\Omega}\) (variance shrinks roughly like \(\Omega/(4m)\)).
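
The large-\(m\) concentration is visible directly from the exact moments (a sketch comparing the exact variance with its \(\Omega/(4m)\) leading-order approximation, computed via gammaln):

```python
import numpy as np
from scipy.special import gammaln

omega = 1.0
for m in [1.0, 4.0, 16.0, 64.0]:
    mean = np.exp(gammaln(m + 0.5) - gammaln(m)) * np.sqrt(omega / m)
    var = omega - mean**2       # exact variance
    approx = omega / (4.0 * m)  # leading-order approximation
    print(m, var, approx)
```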

# PDF: changing m (keep omega fixed)
omega = 1.0
ms = [0.5, 1.0, 2.0, 5.0]

x = np.linspace(0, 4.0, 600)

fig = go.Figure()
for m in ms:
    fig.add_trace(
        go.Scatter(
            x=x,
            y=nakagami_pdf(x, m, omega),
            mode="lines",
            name=f"m={m:g}",
        )
    )
fig.update_layout(
    title="Nakagami PDF: effect of shape m (Ω=1)",
    xaxis_title="x",
    yaxis_title="density",
)
fig.show()

# PDF: changing omega (keep m fixed)
m = 2.0
omegas = [0.5, 1.0, 2.0]

x = np.linspace(0, 6.0, 700)

fig = go.Figure()
for omega in omegas:
    fig.add_trace(
        go.Scatter(
            x=x,
            y=nakagami_pdf(x, m, omega),
            mode="lines",
            name=f"Ω={omega:g}",
        )
    )
fig.update_layout(
    title="Nakagami PDF: effect of Ω (m=2)",
    xaxis_title="x",
    yaxis_title="density",
)
fig.show()

6) Derivations#

Expectation (general moment)#

Using \(Y=X^2\),

\[ \mathbb{E}[X^k] = \mathbb{E}[Y^{k/2}] = \int_0^\infty y^{k/2}\,g(y)\,dy = \left(\frac{\Omega}{m}\right)^{k/2} \frac{\Gamma\left(m+\frac{k}{2}\right)}{\Gamma(m)}. \]

Plugging \(k=1\) yields the mean, and \(k=2\) yields \(\mathbb{E}[X^2]=\Omega\).

Variance#

\[ \mathrm{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \Omega - \left(\frac{\Gamma(m+1/2)}{\Gamma(m)}\right)^2\frac{\Omega}{m}. \]

Likelihood (and MLE equations)#

For data \(x_i\ge 0\), the log-likelihood is

\[ \ell(m,\Omega) = n\big(\log 2 + m\log m - \log\Gamma(m) - m\log\Omega\big) + (2m-1)\sum_{i=1}^n\log x_i - \frac{m}{\Omega}\sum_{i=1}^n x_i^2. \]

  • Fixing \(m\), maximizing over \(\Omega\) gives the closed-form estimator \(\widehat{\Omega} = \frac{1}{n}\sum_i x_i^2\).

  • Plugging \(\widehat{\Omega}\) in and differentiating w.r.t. \(m\) leads to the standard Gamma shape equation (with \(y_i=x_i^2\)):

\[ \log m - \psi(m) = \log(\bar y) - \frac{1}{n}\sum_{i=1}^n\log y_i. \]

This must be solved numerically. Because the Nakagami model restricts \(m\ge 1/2\), the MLE may land on the boundary \(m=1/2\) if the unconstrained optimum is smaller.

def nakagami_loglik(x, m, omega):
    x = np.asarray(x, dtype=float)
    if np.any(x < 0):
        return -np.inf
    return float(np.sum(nakagami_logpdf(x, m, omega)))


def nakagami_mle(x):
    """Constrained MLE for (m, omega) with m>=0.5.

    Uses the Gamma shape MLE equation on y=x^2.
    """
    x = np.asarray(x, dtype=float)
    if np.any(x < 0):
        raise ValueError("data must be >= 0")

    y = x**2
    omega_hat = float(np.mean(y))
    if not (omega_hat > 0):
        return 0.5, omega_hat

    # Avoid log(0) if data contain exact zeros (rare for a continuous model)
    y_pos = y[y > 0]
    if y_pos.size == 0:
        return 0.5, omega_hat

    s = np.log(omega_hat) - float(np.mean(np.log(y_pos)))

    # If s ~ 0, the sample has extremely low dispersion; m -> infinity.
    if s <= 1e-12:
        return 1e6, omega_hat

    def g(m):
        return np.log(m) - psi(m) - s

    lo = 0.5
    if g(lo) <= 0:
        return lo, omega_hat

    hi = 1.0
    while g(hi) > 0 and hi < 1e6:
        hi *= 2.0

    if g(hi) > 0:
        return hi, omega_hat

    sol = root_scalar(g, bracket=(lo, hi), method="brentq")
    return float(sol.root), omega_hat


# Quick sanity check: recover parameters from synthetic data
m_true, omega_true = 2.0, 1.5
data = stats.nakagami(m_true, scale=np.sqrt(omega_true)).rvs(size=4000, random_state=rng)
m_hat, omega_hat = nakagami_mle(data)
m_hat, omega_hat
(2.055715894644666, 1.5069096363629064)

7) Sampling & Simulation (NumPy-only)#

The Gamma connection gives a simple sampler:

  1. Sample \(Y\sim\mathrm{Gamma}(m,\ \text{scale}=\Omega/m)\).

  2. Return \(X = \sqrt{Y}\).

So the main task is sampling a Gamma with shape \(m\).

Marsaglia–Tsang sketch (Gamma sampler)#

For shape \(\alpha\ge 1\) (here \(\alpha=m\)):

  • Set \(d = \alpha - 1/3\), \(c = 1/\sqrt{9d}\).

  • Repeat:

    • draw \(Z\sim\mathcal{N}(0,1)\) and set \(V=(1+cZ)^3\).

    • draw \(U\sim\mathrm{Uniform}(0,1)\).

    • accept \(dV\) if a (cheap) squeeze test passes, otherwise accept using the log test.

For \(0<\alpha<1\) use the reduction:

  • If \(Y\sim\mathrm{Gamma}(\alpha+1,\theta)\) and \(U\sim\mathrm{Uniform}(0,1)\) independent, then \(X = Y\,U^{1/\alpha} \sim \mathrm{Gamma}(\alpha,\theta)\).

Below is a reference implementation using NumPy only.

def _gamma_rvs_mt_shape_ge_1(alpha, n, rng):
    """Marsaglia–Tsang sampler for Gamma(alpha, 1) with alpha >= 1."""
    d = alpha - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)

    out = np.empty(n, dtype=float)
    filled = 0
    while filled < n:
        m = n - filled

        z = rng.normal(size=m)
        v = (1.0 + c * z) ** 3

        valid = v > 0
        if not np.any(valid):
            continue

        z = z[valid]
        v = v[valid]
        u = rng.random(size=v.size)

        accept = (u < 1.0 - 0.0331 * (z**4)) | (
            np.log(u) < 0.5 * z**2 + d * (1.0 - v + np.log(v))
        )

        accepted = d * v[accept]
        k = accepted.size
        out[filled : filled + k] = accepted
        filled += k

    return out


def gamma_rvs_numpy(alpha, theta=1.0, size=1, rng=None):
    """Sample from Gamma(alpha, theta) using NumPy only."""
    if rng is None:
        rng = np.random.default_rng()

    alpha = float(alpha)
    theta = float(theta)
    if alpha <= 0 or theta <= 0:
        raise ValueError("alpha and theta must be > 0")

    size_tuple = (size,) if isinstance(size, int) else tuple(size)
    n = int(np.prod(size_tuple))

    if alpha >= 1:
        x = _gamma_rvs_mt_shape_ge_1(alpha, n, rng)
    else:
        # Boost shape to alpha+1 >= 1, then apply the U^(1/alpha) correction
        y = _gamma_rvs_mt_shape_ge_1(alpha + 1.0, n, rng)
        u = rng.random(size=n)
        x = y * (u ** (1.0 / alpha))

    return (x * theta).reshape(size_tuple)


def nakagami_rvs_numpy(m, omega=1.0, size=1, rng=None):
    """Sample from Nakagami(m, omega) using NumPy only."""
    m, omega = _validate_nakagami_params(m, omega)
    if rng is None:
        rng = np.random.default_rng()

    y = gamma_rvs_numpy(m, theta=omega / m, size=size, rng=rng)
    return np.sqrt(y)


# Smoke test: mean and second moment roughly match theory
m_test, omega_test = 2.0, 1.5
s = nakagami_rvs_numpy(m_test, omega_test, size=60_000, rng=rng)

s.mean(), nakagami_mean(m_test, omega_test), (s**2).mean(), omega_test
(1.1525436945219076, 1.1512425464397995, 1.5038743907493182, 1.5)

8) Visualization#

We’ll visualize:

  • the PDF and how it compares to a Monte Carlo histogram

  • the CDF and how it compares to an empirical CDF

We’ll use the NumPy-only sampler from above.

x = np.linspace(0, 4.0, 600)
omega = 1.0
ms = [0.5, 1.0, 2.0, 5.0]

fig_pdf = go.Figure()
for m in ms:
    fig_pdf.add_trace(
        go.Scatter(x=x, y=nakagami_pdf(x, m, omega), mode="lines", name=f"m={m:g}")
    )
fig_pdf.update_layout(
    title="Nakagami PDF (Ω=1)",
    xaxis_title="x",
    yaxis_title="density",
)
fig_pdf.show()

fig_cdf = go.Figure()
for m in ms:
    fig_cdf.add_trace(
        go.Scatter(x=x, y=nakagami_cdf(x, m, omega), mode="lines", name=f"m={m:g}")
    )
fig_cdf.update_layout(
    title="Nakagami CDF (Ω=1)",
    xaxis_title="x",
    yaxis_title="CDF",
)
fig_cdf.show()

def ecdf(samples):
    x = np.sort(np.asarray(samples))
    y = np.arange(1, x.size + 1) / x.size
    return x, y


m_viz, omega_viz = 2.0, 1.5
n_viz = 80_000

samples = nakagami_rvs_numpy(m_viz, omega_viz, size=n_viz, rng=rng)
x_max = float(np.quantile(samples, 0.995))
x_grid = np.linspace(0.0, x_max, 600)

# PDF + histogram
fig = go.Figure()
fig.add_trace(
    go.Histogram(
        x=samples,
        nbinsx=70,
        histnorm="probability density",
        name="Monte Carlo samples",
        opacity=0.55,
    )
)
fig.add_trace(
    go.Scatter(
        x=x_grid,
        y=nakagami_pdf(x_grid, m_viz, omega_viz),
        mode="lines",
        name="Theoretical PDF",
        line=dict(width=3),
    )
)
fig.update_layout(
    title=f"Nakagami(m={m_viz:g}, Ω={omega_viz:g}): histogram vs PDF",
    xaxis_title="x",
    yaxis_title="density",
    bargap=0.02,
)
fig.show()

# CDF + empirical CDF
xs, ys = ecdf(samples)

fig = go.Figure()
fig.add_trace(
    go.Scatter(
        x=x_grid,
        y=nakagami_cdf(x_grid, m_viz, omega_viz),
        mode="lines",
        name="Theoretical CDF",
        line=dict(width=3),
    )
)
fig.add_trace(
    go.Scatter(
        x=xs[::100],
        y=ys[::100],
        mode="markers",
        name="Empirical CDF (subsampled)",
        marker=dict(size=5),
    )
)
fig.update_layout(
    title=f"Nakagami(m={m_viz:g}, Ω={omega_viz:g}): empirical CDF vs CDF",
    xaxis_title="x",
    yaxis_title="CDF",
)
fig.show()

# Monte Carlo moment check
sample_mean = samples.mean()
sample_m2 = (samples**2).mean()
theory_mean = nakagami_mean(m_viz, omega_viz)
theory_m2 = omega_viz

sample_mean, theory_mean, sample_m2, theory_m2
(1.1524272744432116, 1.1512425464397995, 1.5024354377047782, 1.5)

9) SciPy Integration#

SciPy implements the Nakagami distribution as scipy.stats.nakagami.

Parameter mapping:

  • SciPy’s shape parameter is called nu and corresponds to our \(m\).

  • SciPy’s scale corresponds to \(s=\sqrt{\Omega}\), i.e. \(\Omega = \text{scale}^2\).

  • SciPy also includes a loc shift; for the standard Nakagami distribution you typically fix loc=0.

m_scipy, omega_scipy = 2.0, 1.5
dist = stats.nakagami(m_scipy, scale=np.sqrt(omega_scipy))

x = np.linspace(0, 4, 6)
np.c_[x, nakagami_pdf(x, m_scipy, omega_scipy), dist.pdf(x)]
array([[0.    , 0.    , 0.    ],
       [0.8   , 0.7755, 0.7755],
       [1.6   , 0.4796, 0.4796],
       [2.4   , 0.0227, 0.0227],
       [3.2   , 0.0001, 0.0001],
       [4.    , 0.    , 0.    ]])
# rvs
data = dist.rvs(size=5000, random_state=rng)

# fit (fix loc=0 to match the standard support)
nu_hat, loc_hat, scale_hat = stats.nakagami.fit(data, floc=0)
m_hat_scipy = nu_hat
omega_hat_scipy = scale_hat**2

# Compare with the (m, omega) MLE derived from the Gamma link
m_hat_mle, omega_hat_mle = nakagami_mle(data)

(m_hat_scipy, omega_hat_scipy), (m_hat_mle, omega_hat_mle)
((1.988644559230805, 1.4934785377454523),
 (1.988614048401301, 1.4934772333000144))

10) Statistical Use Cases#

Hypothesis testing#

A common question in fading models is whether the envelope is consistent with Rayleigh fading, i.e. \(m=1\).

  • \(H_0: m=1\) (Rayleigh)

  • \(H_1: m \ne 1\) (general Nakagami, \(m\ge 1/2\))

A likelihood-ratio test (LRT) uses

\[ \Lambda = 2\big(\ell(\widehat m, \widehat\Omega) - \ell(m_0, \widehat\Omega_0)\big), \]

and (asymptotically) compares \(\Lambda\) to a \(\chi^2_1\) distribution.

Bayesian modeling#

Working with the power \(Y=X^2\) is often easiest. If you treat \(m\) as known and write the Gamma model in rate form

\[ Y\mid\lambda \sim \mathrm{Gamma}(\text{shape}=m,\ \text{rate}=\lambda),\qquad \lambda = \frac{m}{\Omega}, \]

then a Gamma prior \(\lambda\sim\mathrm{Gamma}(a_0,b_0)\) is conjugate:

\[ \lambda\mid y_{1:n} \sim \mathrm{Gamma}\left(a_0 + nm,\ b_0 + \sum_i y_i\right). \]

If \(m\) is also unknown, the model is no longer conjugate; typical approaches are numerical optimization, MCMC, or variational inference.

Generative modeling#

A simple complex fading coefficient can be generated as

\[ H = X e^{i\Phi},\quad X\sim\mathrm{Nakagami}(m,\Omega),\ \Phi\sim\mathrm{Uniform}(0,2\pi). \]

This produces a circularly symmetric random complex coefficient with controllable envelope statistics.
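
A minimal NumPy sketch of this generator (seed and parameters are arbitrary; the envelope is drawn via the Gamma link):

```python
import numpy as np

rng = np.random.default_rng(7)
m, omega = 2.0, 1.0
n = 100_000

# Envelope X via the Gamma link, independent uniform phase Phi
x = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
h = x * np.exp(1j * phi)

# Average power E|H|^2 should be close to omega; E[H] should be near 0
print(np.mean(np.abs(h) ** 2), np.mean(h))
```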

# Example: LRT for H0: m=1 (Rayleigh) vs H1: m free
m_alt, omega_alt = 2.0, 1.0
x = nakagami_rvs_numpy(m_alt, omega_alt, size=3000, rng=rng)

# Null model: m fixed at 1, omega MLE is mean(x^2)
m0 = 1.0
omega0 = float(np.mean(x**2))
ll0 = nakagami_loglik(x, m0, omega0)

# Alternative: (m, omega) MLE
m1, omega1 = nakagami_mle(x)
ll1 = nakagami_loglik(x, m1, omega1)

lrt = 2.0 * (ll1 - ll0)
p_value = stats.chi2.sf(lrt, df=1)

(m1, omega1), lrt, p_value
((1.976324368518703, 1.0012587550407712),
 674.9860946700712,
 8.22779501891415e-149)
# Example: conjugate Bayesian update for omega when m is treated as known
y = x**2
m_known = 2.0

# Prior on lambda = m/omega (Gamma shape-rate)
a0, b0 = 2.0, 2.0

a_post = a0 + y.size * m_known
b_post = b0 + float(np.sum(y))

# Sample lambda ~ Gamma(shape=a_post, rate=b_post) and transform to omega = m/lambda
lambda_samps = rng.gamma(shape=a_post, scale=1.0 / b_post, size=50_000)
omega_samps = m_known / lambda_samps

np.quantile(omega_samps, [0.025, 0.5, 0.975])
array([0.9768, 1.0016, 1.0274])

11) Pitfalls#

  1. Parameter constraints

  • In the Nakagami model, \(m\ge 1/2\) and \(\Omega>0\).

  • Many libraries (including SciPy) also allow a loc shift; if you want the standard support \([0,\infty)\), fix loc=0.

  2. Parameterization mismatch

  • In this notebook, \(\Omega=\mathbb{E}[X^2]\).

  • In SciPy, \(\text{scale} = \sqrt{\Omega}\).

  3. Numerical stability

  • Use gammaln (log-gamma) and work in log space when computing likelihoods.

  • When \(m\) is large, ratios like \(\Gamma(m+1/2)/\Gamma(m)\) should be computed via log-gamma differences.

  • If your data contain exact zeros (common due to rounding), terms like \(\log(x_i^2)\) can break MLE equations; treat zeros carefully or add a small jitter.

  4. Model fit

  • Nakagami is a good model for envelopes, but not for signed quantities.

  • Check diagnostics (histograms, QQ plots, predictive checks) and consider mixture/alternate models if the tail or near-zero behavior is off.
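
One simple screen along these lines is a KS test against the fitted model (a sketch on simulated data; note that p-values are optimistic once parameters are estimated from the same sample):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated envelope data, then a fit with loc fixed at 0
data = stats.nakagami(1.5, scale=1.0).rvs(size=2000, random_state=rng)
nu_hat, loc_hat, scale_hat = stats.nakagami.fit(data, floc=0)

# Rough goodness-of-fit screen (not an exact test after fitting)
res = stats.kstest(data, stats.nakagami(nu_hat, scale=scale_hat).cdf)
print(nu_hat, scale_hat**2, res.statistic)
```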

12) Summary#

  • Nakagami is a continuous distribution on \([0,\infty)\) with parameters shape \(m\ge 1/2\) and spread \(\Omega>0\).

  • It is a standard model for fading envelopes in wireless channels; \(m\) controls fading severity and \(\Omega=\mathbb{E}[X^2]\) is average power.

  • Key identity: \(X^2\sim\mathrm{Gamma}(m,\ \text{scale}=\Omega/m)\), which yields moments, CDF, and efficient sampling.

  • SciPy provides scipy.stats.nakagami (with \(\text{scale}=\sqrt{\Omega}\)) for pdf, cdf, rvs, and fit.

References#

  • Nakagami, M. (1960). The m-distribution — A general formula of intensity distribution of rapid fading.

  • Simon, M. K. and Alouini, M.-S. Digital Communication over Fading Channels (Nakagami-\(m\) fading).

  • Marsaglia, G. and Tsang, W. W. (2000). A Simple Method for Generating Gamma Variables.

  • SciPy documentation: scipy.stats.nakagami